In this paper, we propose DiffFace, a diffusion-based face swapping framework composed of ID-conditional DDPM training, sampling with facial guidance, and target-preserving blending. Specifically, in the training process, the ID-conditional DDPM is trained to generate face images with the desired identity. In the sampling process, we use off-the-shelf facial expert models to make the model transfer the source identity while faithfully preserving the target attributes. During this process, to preserve the background of the target image and obtain the desired face swapping result, we additionally propose a target-preserving blending strategy. It helps our model keep the attributes of the target face against noise while transferring the source facial identity. In addition, without any re-training, our model can flexibly apply additional facial guidance and adaptively control the identity-attribute trade-off to achieve the desired results. To the best of our knowledge, this is the first approach that applies a diffusion model to the face swapping task. Compared with previous GAN-based approaches, by taking advantage of the diffusion model for face swapping, DiffFace offers benefits such as training stability, high fidelity, sample diversity, and controllability. Extensive experiments show that DiffFace is comparable or superior to state-of-the-art methods on several standard face swapping benchmarks.
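As an illustration of how external facial experts can steer diffusion sampling, here is a minimal sketch of a classifier-guidance-style step in which an identity loss from a face recognition model shifts the denoising mean toward the source identity; `ddpm.p_mean_variance`, `id_extractor`, and the guidance form are placeholder assumptions, not DiffFace's actual implementation.

```python
import torch

def guided_sampling_step(ddpm, x_t, t, src_id_feat, id_extractor, guidance_scale=1.0):
    """One reverse-diffusion step steered by an identity loss.

    ddpm.p_mean_variance is assumed to return the posterior mean, variance,
    and predicted clean image; id_extractor maps an image to an identity
    embedding.  All names are placeholders for whatever facial expert models
    are plugged in.
    """
    x_t = x_t.detach().requires_grad_(True)
    mean, var, pred_x0 = ddpm.p_mean_variance(x_t, t)

    # Identity guidance: pull the predicted clean image toward the source identity.
    id_loss = 1.0 - torch.cosine_similarity(
        id_extractor(pred_x0), src_id_feat, dim=-1).mean()
    grad = torch.autograd.grad(id_loss, x_t)[0]

    # Shift the posterior mean against the gradient (classifier-guidance style).
    guided_mean = mean - guidance_scale * var * grad
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return guided_mean + var.sqrt() * noise
```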
Neural-network-driven mutual information (MI) bounds have driven remarkable progress in many areas of machine learning. However, losses built on conventional MI are often challenging to use because of their practical and mathematical limitations. In this work, we first identify the symptoms behind their instability: (1) the neural network does not converge even after the loss appears to have converged, and (2) saturated neural network outputs cause the loss to diverge. We mitigate both issues by adding a novel regularization term to the existing losses. We demonstrate, both theoretically and experimentally, that the added regularization stabilizes training. Finally, we present a novel benchmark that evaluates MI-based losses on both their MI estimation power and their capability on downstream tasks, closely following pre-existing supervised and contrastive learning settings. We evaluate six different MI-based losses and their regularized counterparts on multiple benchmarks to show that our approach is simple yet effective.
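To make the idea concrete, below is a minimal sketch of an InfoNCE-style MI lower bound with an added penalty on the critic's output magnitude; the squared-diagonal regularizer is an illustrative assumption, not necessarily the paper's exact term.

```python
import torch
import torch.nn.functional as F

def regularized_infonce_loss(scores, reg_weight=0.1):
    """InfoNCE-style MI objective with a penalty against critic saturation.

    scores: [B, B] critic outputs f(x_i, y_j); the diagonal holds positive pairs.
    The squared-mean penalty on positive scores is one illustrative way to
    discourage saturated critic outputs.
    """
    labels = torch.arange(scores.size(0), device=scores.device)
    infonce = F.cross_entropy(scores, labels)   # minimizing this maximizes the InfoNCE MI bound
    reg = scores.diagonal().pow(2).mean()       # keep positive-pair scores from blowing up
    return infonce + reg_weight * reg
```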
Social media platforms struggle to protect users from harmful content through content moderation. These platforms have recently leveraged machine learning models to cope with the vast amount of user-generated content produced every day. Since moderation policies vary across countries and product types, it is common to train and deploy a separate model for each policy. However, this approach is highly inefficient, especially when a policy changes and the dataset must be re-labeled and the model re-trained on the shifted data distribution. To alleviate this cost inefficiency, social media platforms often employ third-party content moderation services that provide prediction scores for multiple subtasks, such as detecting the presence of minors, rude gestures, or weapons, instead of directly providing final moderation decisions. However, making a reliable automated moderation decision from the prediction scores of multiple subtasks has not been widely explored. In this study, we formulate a real-world scenario of content moderation and introduce a simple yet effective threshold optimization method that searches for the optimal thresholds of the multiple subtasks to make reliable moderation decisions in a cost-effective way. Extensive experiments demonstrate that our approach performs better in content moderation than existing threshold optimization methods and heuristics.
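A minimal sketch of the threshold-search idea: per-subtask thresholds are tuned on validation data so that flagging a post whenever any subtask score exceeds its threshold minimizes a cost-weighted error; the coordinate-wise grid search and the any-exceeds decision rule are illustrative assumptions, not the paper's exact optimizer.

```python
import numpy as np

def optimize_thresholds(scores, labels, fp_cost=1.0, fn_cost=5.0, n_rounds=3):
    """Coordinate-wise search for per-subtask thresholds.

    scores: [N, K] sub-task prediction scores; labels: [N] final harmful (1) /
    benign (0) labels.  A sample is flagged if any sub-task score exceeds its
    threshold.
    """
    grid = np.linspace(0.0, 1.0, 101)
    K = scores.shape[1]
    thresholds = np.full(K, 0.5)

    def cost(th):
        flagged = (scores > th).any(axis=1)
        fp = np.sum(flagged & (labels == 0))   # benign content removed
        fn = np.sum(~flagged & (labels == 1))  # harmful content missed
        return fp_cost * fp + fn_cost * fn

    for _ in range(n_rounds):
        for k in range(K):
            best_v, best_c = thresholds[k], cost(thresholds)
            for v in grid:
                cand = thresholds.copy()
                cand[k] = v
                c = cost(cand)
                if c < best_c:
                    best_v, best_c = v, c
            thresholds[k] = best_v
    return thresholds
```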
Multilingual speech data often suffer from a long-tailed language distribution, leading to performance degradation. However, multilingual text data are much easier to obtain, yielding more useful general-purpose language models. We are therefore motivated to distill the rich knowledge embedded in a well-trained teacher text model into a student speech model. We propose a novel method, distilling a language model into a speech model (Distill-L2S), which aligns the latent representations of the two different modalities. The subtle differences between them are handled by a shrinking mechanism, nearest-neighbor interpolation, and a learnable linear projection layer. We demonstrate the effectiveness of our distillation method by applying it to the multilingual automatic speech recognition (ASR) task. We distill a transformer-based cross-lingual language model (InfoXLM) while fine-tuning a large-scale multilingual ASR model (XLSR-wav2vec 2.0) for each language. We show the superiority of our method on 20 low-resource languages of the public CommonVoice dataset with less than 100 hours of speech data.
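A minimal sketch of the cross-modal alignment idea, assuming the student's frame-level speech features are projected into the teacher's text embedding space and matched with an L2 loss; here the paper's shrinking mechanism and nearest-neighbor interpolation are approximated by simply resampling the speech sequence to the teacher's length.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechToTextDistiller(nn.Module):
    """Project student speech features into the teacher text space and match them.

    Dimensions are illustrative; the teacher text features are kept frozen.
    """
    def __init__(self, speech_dim=1024, text_dim=768):
        super().__init__()
        self.proj = nn.Linear(speech_dim, text_dim)

    def forward(self, speech_feats, text_feats):
        # speech_feats: [B, T_s, speech_dim] from the ASR encoder
        # text_feats:   [B, T_t, text_dim] from the frozen text teacher
        proj = self.proj(speech_feats)                      # [B, T_s, text_dim]
        proj = F.interpolate(proj.transpose(1, 2),          # resample time axis to T_t
                             size=text_feats.size(1),
                             mode="nearest").transpose(1, 2)
        return F.mse_loss(proj, text_feats.detach())
```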
Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset. In particular, model-agnostic meta-learning-based label correction methods further improve performance by correcting noisy labels on the fly. However, there is no safeguard against label miscorrection, leading to unavoidable performance degradation. Moreover, every training step requires at least three back-propagations, which significantly slows down training. To mitigate these issues, we propose a robust and efficient method that learns a label transition matrix on the fly. Employing the transition matrix makes the classifier skeptical about all corrected samples, which alleviates the miscorrection issue. We also introduce a two-head architecture to efficiently estimate the label transition matrix in a single back-propagation, so that the estimated matrix closely follows the shifting noise distribution induced by label correction. Extensive experiments demonstrate that our method achieves comparable or better accuracy than existing methods with superior training efficiency.
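A minimal sketch of a two-head design for on-the-fly transition-matrix estimation: one head predicts the clean-label posterior, the other an instance-dependent transition matrix, and their product gives the noisy-label likelihood, so both heads are trained with a single back-propagation; the architecture details are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNoisyClassifier(nn.Module):
    """Classifier head plus transition-matrix head sharing one backbone.

    The noisy-label likelihood is p(noisy=j | x) = sum_i p(clean=i | x) * T_ij(x),
    so a single cross-entropy on the noisy labels updates both heads at once.
    """
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.clean_head = nn.Linear(feat_dim, num_classes)
        self.trans_head = nn.Linear(feat_dim, num_classes * num_classes)
        self.num_classes = num_classes

    def forward(self, x, noisy_y):
        feat = self.backbone(x)
        clean_prob = F.softmax(self.clean_head(feat), dim=-1)                   # [B, C]
        T = F.softmax(self.trans_head(feat).view(-1, self.num_classes,
                                                 self.num_classes), dim=-1)     # rows sum to 1
        noisy_prob = torch.bmm(clean_prob.unsqueeze(1), T).squeeze(1)           # [B, C]
        return F.nll_loss(torch.log(noisy_prob + 1e-8), noisy_y)
```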
3D-aware image synthesis focuses on preserving spatial consistency while generating high-resolution images with fine details. Recently, the Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate a generative NeRF and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features to the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated on three image datasets, AFHQ, CelebA, and Cars. As a result, our model shows strong 3D consistency with fine details and smooth interpolation in conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF achieves a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of the generated 3D-aware images for each class of the datasets, since class-conditional images can be synthesized with $\text{C}^{3}$G-NeRF.
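A schematic sketch of conditional feature projection in a NeRF-style generator, where the condition vector is concatenated with the positional encoding of each sample point (the discriminator would receive the same condition); this is a generic illustration, not the $\text{C}^{3}$G-NeRF architecture itself.

```python
import torch
import torch.nn as nn

class ConditionalNeRFGenerator(nn.Module):
    """Inject a class/continuous condition into a NeRF-style MLP generator."""
    def __init__(self, pos_dim, cond_dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # RGB + density per sample point
        )

    def forward(self, encoded_xyz, cond):
        # encoded_xyz: [N, pos_dim] positional encoding of 3D sample points
        # cond:        [N, cond_dim] class one-hot or continuous feature code
        return self.mlp(torch.cat([encoded_xyz, cond], dim=-1))
```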
Cellular automata (CA) captivate researchers due to the emergent, complex, individualized behavior that simple global rules of interaction produce. Recent advances in the field have combined CA with convolutional neural networks to achieve self-regenerating images. This new branch of CA is called neural cellular automata [1]. The goal of this project is to use the idea of neural cellular automata to grow prediction machines. We place many different convolutional neural networks in a grid. Each conv net cell outputs a prediction of what the next state will be and minimizes its predictive error. Cells receive their neighbors' colors and fitnesses as input, and each cell's fitness score describes how accurate its predictions are. Cells can also move to explore their environment, with some stochasticity applied to movement.
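A minimal sketch of one such predictor cell, assuming each cell is a tiny network that takes the colors and fitnesses of its 3x3 neighborhood and predicts its own next color, with fitness defined as negative prediction error; sizes and the MSE fitness are illustrative choices, not the project's exact setup.

```python
import torch
import torch.nn as nn

class PredictorCell(nn.Module):
    """A single grid cell: a tiny network that predicts its next observation."""
    def __init__(self, n_channels=3):
        super().__init__()
        in_dim = 9 * (n_channels + 1)            # 9 neighbors x (color + fitness)
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_channels))

    def step(self, neighborhood, next_observation):
        # neighborhood: [9, n_channels + 1] colors and fitnesses of the 3x3 patch
        pred = self.net(neighborhood.flatten())
        error = ((pred - next_observation) ** 2).mean()
        fitness = -error.item()                  # higher fitness = better prediction
        return pred, error, fitness
```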
There is a dramatic shortage of skilled labor for modern vineyards. The Vinum project is developing a mobile robotic solution to autonomously navigate through vineyards for winter grapevine pruning, which requires an autonomous navigation stack for the pruning robot. The Vinum project uses the quadruped robot HyQReal. This paper introduces an architecture for a quadruped robot to autonomously move through a vineyard by identifying and approaching grapevines for pruning. The higher-level control is a state machine that switches between searching for destination positions, autonomously navigating toward those locations, and stopping for the robot to complete a task. The destination points are determined by identifying grapevine trunks using instance segmentation from a Mask Region-Based Convolutional Neural Network (Mask-RCNN). These detections are passed through a filter to avoid redundancy and remove noisy detections. The combination of these features forms the basis of the proposed architecture.
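A minimal sketch of the high-level state machine described above, with searching, navigating, and task-execution modes; the transition conditions and names are illustrative, not the Vinum controller's actual code.

```python
from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()      # look for the next grapevine trunk
    NAVIGATE = auto()    # move toward the selected trunk
    TASK = auto()        # stand still while pruning is carried out

def step(mode, detections, at_goal, task_done):
    """One tick of the high-level controller.

    detections: filtered Mask-RCNN trunk detections; at_goal / task_done are
    booleans from the navigation stack and the pruning routine.
    """
    if mode is Mode.SEARCH and detections:
        return Mode.NAVIGATE
    if mode is Mode.NAVIGATE and at_goal:
        return Mode.TASK
    if mode is Mode.TASK and task_done:
        return Mode.SEARCH
    return mode
```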
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
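A minimal sketch of the greedy acquisition loop at test time, assuming a learned selector network that scores unobserved features by their estimated conditional mutual information with the label and a predictor that uses the acquired subset; both networks and their amortized training are placeholders, not the paper's implementation.

```python
import torch

def dynamic_feature_selection(x, selector, predictor, budget):
    """Greedily query features for a single sample x of shape [num_features].

    selector(masked_x, mask) is assumed to return a per-feature informativeness
    score; predictor(masked_x, mask) makes the final prediction.
    """
    mask = torch.zeros_like(x)
    for _ in range(budget):
        scores = selector(x * mask, mask)                     # [num_features]
        scores = scores.masked_fill(mask.bool(), float("-inf"))
        j = scores.argmax()                                   # most informative next query
        mask[j] = 1.0                                         # "acquire" feature j
    return predictor(x * mask, mask)
```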
In this paper, we learn a diffusion model to generate 3D data at scene scale. Specifically, our model crafts a 3D scene consisting of multiple objects, whereas recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., a categorical distribution, to assign multiple objects to semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deployment. To the best of our knowledge, our work is the first to apply discrete and latent diffusion to 3D categorical data at scene scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution with our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion
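A minimal sketch of a D3PM-style forward corruption on categorical scene data, where each voxel's class label is kept or resampled uniformly at random with a timestep-dependent probability; this generic uniform-transition form is an illustrative assumption, not necessarily the paper's exact noise schedule.

```python
import torch

def corrupt_labels(labels, t, num_classes, betas):
    """Forward step of a uniform discrete diffusion on voxel class labels.

    labels: [N] integer semantic classes of scene voxels/points; with probability
    betas[t] a label is resampled uniformly over the classes, otherwise it is kept.
    """
    beta = betas[t]
    resample = torch.rand(labels.shape, device=labels.device) < beta
    random_labels = torch.randint(0, num_classes, labels.shape, device=labels.device)
    return torch.where(resample, random_labels, labels)
```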